This is a tutorial and survey paper on Karush-Kuhn-Tucker (KKT) conditions,
first-order and second-order numerical optimization, and distributed
optimization. After a brief review of the history of optimization, we start with some preliminaries on the properties of sets, norms, and functions, and on concepts of
optimization. Then, we introduce the optimization problem, standard
optimization problems (including linear programming, quadratic programming, and
semidefinite programming), and convex problems. We also introduce some
techniques such as eliminating inequality, equality, and set constraints,
adding slack variables, and the epigraph form. We introduce the Lagrangian function,
dual variables, KKT conditions (including primal feasibility, dual feasibility,
weak and strong duality, complementary slackness, and the stationarity condition), and solving optimization problems by the method of Lagrange multipliers. Then, we cover
first-order optimization including gradient descent, line-search, convergence
of gradient methods, momentum, steepest descent, and backpropagation. Other
first-order methods are explained, such as the accelerated gradient method, stochastic gradient descent, mini-batch gradient descent, the stochastic average gradient, the stochastic variance reduced gradient, AdaGrad, RMSProp, and the Adam optimizer, as well as proximal methods (including the proximal mapping, the proximal point algorithm, and the proximal gradient method) and constrained gradient methods (including the projected gradient method, projection onto convex sets, and the Frank-Wolfe method). We also cover non-smooth and $\ell_1$ optimization methods, including lasso regularization, the convex conjugate, the Huber function, soft-thresholding, coordinate descent, and subgradient methods. Then, we
explain second-order methods, including Newton's method for unconstrained, equality-constrained, and inequality-constrained problems....